explicit mapping
Reviews: Meta-Weight-Net: Learning an Explicit Mapping For Sample Weighting
This paper studies the problem of learning from biased training data, which notably covers the cases of class imbalance and noisy labels. The proposed Meta-Weight-Net is an MLP with one hidden layer that learns a mapping from a sample's training loss to its weight. Minimizing the training objective then naturally leads the classifier to focus more on samples that agree with the meta-knowledge. Theoretically it is shown that the algorithm converges to critical points of the loss under classical assumptions (but I am quite confused by the proof, see below).
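The weighting net described in the review can be sketched concretely. Below is a minimal NumPy version of a one-hidden-layer MLP mapping a scalar loss to a weight; the hidden size of 100, ReLU activation, and sigmoid output follow the paper's description, while the initialization scale is an assumption for illustration.

```python
import numpy as np

class MetaWeightNet:
    """One-hidden-layer MLP mapping a per-sample training loss to a weight in (0, 1).

    Hidden size 100 with ReLU and a sigmoid output follow the paper; the
    random-initialization scale here is an assumption of this sketch.
    """
    def __init__(self, hidden=100, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(1, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(scale=0.1, size=(hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, losses):
        # losses: array of per-sample training losses, shape (n,)
        h = np.maximum(losses[:, None] @ self.W1 + self.b1, 0.0)  # ReLU
        z = h @ self.W2 + self.b2
        return 1.0 / (1.0 + np.exp(-z[:, 0]))  # sigmoid -> weights in (0, 1)

wnet = MetaWeightNet()
weights = wnet(np.array([0.1, 1.0, 5.0]))  # one weight per sample
```

In the actual method this net's parameters are themselves updated by gradient descent on a small clean meta set, which is what makes the learned weighting adaptive rather than hand-designed.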
An Exact Finite-dimensional Explicit Feature Map for Kernel Functions
Ghiasi-Shirazi, Kamaledin, Qaraei, Mohammadreza
Kernel methods in machine learning use a kernel function that takes two data points as input and returns their inner product after mapping them to a Hilbert space, implicitly and without actually computing the mapping. For many kernel functions, such as Gaussian and Laplacian kernels, the feature space is known to be infinite-dimensional, making operations in this space possible only implicitly. This implicit nature necessitates algorithms to be expressed using dual representations and the kernel trick. In this paper, we introduce an explicit, finite-dimensional feature map for any arbitrary kernel function that ensures the inner product of data points in the feature space equals the kernel function value, during both training and testing. The existence of this explicit mapping allows kernelized algorithms to be formulated in their primal form, without the need for the kernel trick or the dual representation. As a first application, we demonstrate how to derive kernelized machine learning algorithms directly, without resorting to the dual representation, and apply this method specifically to PCA. As another application, without any changes to the t-SNE algorithm and its implementation, we use it for visualizing the feature space of kernel functions.
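One simple way to realize such an explicit finite-dimensional map is to factorize the kernel matrix over all points (train and test together) so that inner products of the factor's rows reproduce the kernel values. The sketch below uses an eigendecomposition; this factorization view is my own illustration of the idea, and the paper's exact construction may differ.

```python
import numpy as np

def explicit_feature_map(K):
    """Given the kernel matrix K over ALL points (train and test together),
    return features Phi whose rows satisfy Phi[i] @ Phi[j] == K[i, j].

    Works because K is symmetric PSD: K = V diag(vals) V^T = Phi Phi^T
    with Phi = V diag(sqrt(vals)).
    """
    vals, vecs = np.linalg.eigh(K)       # eigendecomposition of symmetric K
    vals = np.clip(vals, 0.0, None)      # clip tiny negative eigenvalues
    return vecs * np.sqrt(vals)          # Phi, shape (n, n)

# Gaussian kernel on a few points
X = np.random.default_rng(0).normal(size=(6, 2))
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)
Phi = explicit_feature_map(K)
ok = np.allclose(Phi @ Phi.T, K)  # inner products match the kernel exactly
```

With `Phi` in hand, any primal-form algorithm (linear PCA, t-SNE on raw vectors, etc.) can be run directly on the rows, which is the point the abstract makes.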
Meta-Weight-Net: Learning an Explicit Mapping For Sample Weighting
Shu, Jun, Xie, Qi, Yi, Lixuan, Zhao, Qian, Zhou, Sanping, Xu, Zongben, Meng, Deyu
Current deep neural networks (DNNs) can easily overfit to biased training data with corrupted labels or class imbalance. A sample re-weighting strategy is commonly used to alleviate this issue by designing a weighting function mapping from training loss to sample weight, and then iterating between weight recalculation and classifier updating. Current approaches, however, need to manually pre-specify the weighting function as well as its additional hyper-parameters, which makes them hard to apply generally in practice, since the proper weighting scheme varies significantly with the investigated problem and training data. To address this issue, we propose a method capable of adaptively learning an explicit weighting function directly from data.
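The iteration the abstract describes, recomputing per-sample weights from current losses and then taking a weighted classifier update, can be shown on a toy problem. The sketch below uses logistic regression and a hand-designed weighting function (downweighting high-loss samples, a common noise-robust choice); in the paper this function is replaced by the learned Meta-Weight-Net.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reweighted_training(X, y, weight_fn, steps=200, lr=0.5):
    """Alternate between (a) recomputing per-sample weights from the current
    losses via `weight_fn` and (b) a weighted gradient step on a
    logistic-regression classifier."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        losses = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        s = weight_fn(losses)                  # weight recalculation
        grad = X.T @ (s * (p - y)) / len(y)    # weighted classifier update
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
X[:100] += 2.0   # class 1 cluster
X[100:] -= 2.0   # class 0 cluster
y = np.concatenate([np.ones(100), np.zeros(100)])

# Hand-designed rule: downweight high-loss (likely noisy) samples.
w = reweighted_training(X, y, lambda l: np.exp(-l))
acc = ((sigmoid(X @ w) > 0.5) == y).mean()
```

The key limitation the abstract points at is visible here: `lambda l: np.exp(-l)` is a fixed design decision (good under label noise, bad under class imbalance, where one would rather *up*weight high-loss minority samples), and choosing it correctly requires knowing the bias in advance.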